# Mathematical reasoning enhancement
**Acereason Nemotron 14B GGUF** · QuantFactory
AceReason-Nemotron-14B is a mathematical and code reasoning model trained through reinforcement learning; it achieves strong results across multiple math and code reasoning benchmarks.
Tags: Large Language Model, Transformers · Downloads: 326 · Likes: 2
**Acereason Nemotron 7B GGUF** · QuantFactory
AceReason-Nemotron-7B is a mathematical and code reasoning model trained with reinforcement learning, starting from DeepSeek-R1-Distilled-Qwen-7B, and achieves strong results across multiple benchmarks.
Tags: Large Language Model, Transformers · Downloads: 326 · Likes: 2
**Nova 0.5 R1 7B** · oscar128372 · Apache-2.0
A high-performance reasoning model trained on the OpenThoughts-114k-math dataset and other cognition-enhancing training sets.
Tags: Large Language Model, Transformers, English · Downloads: 18 · Likes: 2
**Codev R1 Distill Qwen 7B** · zhuyaoyu
A Verilog RTL code generation model distilled from DeepSeek-R1 that performs strongly on Verilog benchmarks.
Tags: Large Language Model, Transformers · Downloads: 154 · Likes: 2
**Aceinstruct 72B** · nvidia
AceInstruct is a family of advanced SFT models built on Qwen, suited to coding, mathematics, and general tasks.
Tags: Safetensors, Supports Multiple Languages · Downloads: 1,584 · Likes: 18
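For orientation, the sketch below shows how a Qwen-based instruct model like this is typically loaded with the Hugging Face Transformers library. The repository id `nvidia/AceInstruct-72B`, the prompt, and the generation settings are assumptions for illustration, not details taken from this listing; a 72B checkpoint also needs multiple GPUs or offloading in practice.

```python
# Minimal sketch: loading a Qwen-based instruct model with Transformers.
# The repo id below is an assumption based on the publisher shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/AceInstruct-72B"  # assumed repo id, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs / CPU
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```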
**Dolphin3.0 Llama3.2 3B GGUF** · bartowski
A 3B-parameter large language model based on the Llama3.2 architecture, supporting English text generation and quantized with llama.cpp using imatrix calibration.
Tags: Large Language Model, English · Downloads: 5,665 · Likes: 15
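Since several of the entries above ship as GGUF quantizations, here is a minimal sketch of running such a file locally through the llama-cpp-python bindings. The file name, context size, and sampling parameters are placeholders chosen for illustration, not values taken from this listing.

```python
# Minimal sketch: running a GGUF quantization locally via llama-cpp-python.
# The model_path is a placeholder; download the .gguf file first.
from llama_cpp import Llama

llm = Llama(
    model_path="Dolphin3.0-Llama3.2-3B-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

result = llm(
    "Explain the difference between supervised fine-tuning and RLHF.",
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```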